Annals of Emerging Technologies in Computing (AETiC)

 
Table of Contents

· Table of Contents (Volume #5, Issue #2)


 
Cover Page

· Cover Page (Volume #5, Issue #2)


 
Editorial

· Editorial (Volume #5, Issue #2)


 
Paper #1

Effective Performance Metrics for Multimedia Mission-critical Communication Systems

Ashraf Ali and Andrew Ware


Abstract: Mission-critical Communication Systems that are adaptable for use with the latest generation of multimedia services are crucial for system users. To determine the set of requirements that need to be hardcoded into such systems, a clear distinction between mission-critical and non-mission-critical systems is required. Moreover, the users of services provided by such systems are very different to those of current mobile commercial communication systems. These differences give rise to a set of challenges that need addressing to facilitate migration from existing systems to those now being proposed. One such challenge relates to the performance of the IP Multimedia Subsystem (IMS) registration process. This is a crucial consideration for mission-critical systems, particularly in large-scale systems where thousands or even millions of users may seek to access the system in disaster scenarios. This paper presents an evaluation of IMS and Session Initiation Protocol (SIP) performance metrics and Key Performance Indicators (KPIs). Moreover, it articulates a proposed study that will seek to address some of the challenges identified.
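Illustrative sketch: the snippet below shows the kind of registration KPI computation the abstract discusses, deriving a registration delay and success ratio from timestamped SIP REGISTER transactions, in the spirit of the Registration Request Delay metric of RFC 6076 (time from REGISTER request to final response). The transaction record layout and the 250 ms target are assumptions made for this example, not values taken from the paper.

```python
# Illustrative sketch (not the paper's measurement setup): compute a
# registration-delay KPI and a success ratio from SIP REGISTER transactions.
# Field names and the delay target are assumptions made for this example.
from dataclasses import dataclass
from statistics import mean, quantiles

@dataclass
class RegisterTransaction:
    request_ts: float      # time REGISTER was sent (seconds)
    response_ts: float     # time the final response was received (seconds)
    status_code: int       # SIP final response code, e.g. 200, 403, 503

def registration_kpis(transactions, delay_target_s=0.25):
    """Registration delay statistics and success ratio (RFC 6076-style RRD)."""
    successful = [t for t in transactions if t.status_code == 200]
    delays = [t.response_ts - t.request_ts for t in successful]
    if len(delays) < 2:
        return {"success_ratio": len(successful) / len(transactions) if transactions else 0.0}
    return {
        "success_ratio": len(successful) / len(transactions),
        "mean_delay_s": mean(delays),
        "p95_delay_s": quantiles(delays, n=20)[-1],   # 95th percentile delay
        "within_target": sum(d <= delay_target_s for d in delays) / len(delays),
    }

if __name__ == "__main__":
    sample = [
        RegisterTransaction(0.00, 0.12, 200),
        RegisterTransaction(1.00, 1.31, 200),
        RegisterTransaction(2.00, 2.05, 503),   # overload rejection
    ]
    print(registration_kpis(sample))
```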


Keywords: IMS; IP Multimedia Subsystems; Key Performance Indicators; Long Term Evolution; LTE; Performance Metrics; Session Initiation Protocol; SIP.


Download Full Text


 
Paper #2

Incremental Search for Informative Gene Selection in Cancer Classification

Fathima Fajila and Yuhanis Yusof


Abstract: Although numerous methods of using microarray data analysis for classification have been reported, there is still room in the field of cancer classification for new approaches to informative gene selection. This study introduces a new incremental search-based gene selection approach for cancer classification. Wrappers are effective at determining the relevant genes in a gene pool because they evaluate each candidate gene subset; nevertheless, the search algorithm plays a major role in gene subset selection. Applying the search incrementally therefore offers the possibility of finding more informative genes. Thus, we introduce an approach that utilizes two search algorithms for gene subset selection. The approach was efficient enough to classify five out of six microarray datasets with 100% accuracy using only a few biomarkers, while the remaining dataset was classified with only one misclassification.
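Illustrative sketch: the generic wrapper idea referred to above can be shown as a greedy forward selection in which a classifier scores candidate gene subsets by cross-validated accuracy. This is not the paper's two-search-algorithm procedure; the classifier choice, stopping rule, and synthetic data are assumptions for illustration only.

```python
# Generic wrapper-based incremental gene selection sketch (illustrative only;
# not the paper's exact two-search procedure). Requires scikit-learn and numpy.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def incremental_wrapper_selection(X, y, max_genes=10, cv=5):
    """Greedily add the gene that most improves cross-validated accuracy."""
    selected, best_score = [], 0.0
    remaining = list(range(X.shape[1]))
    clf = KNeighborsClassifier(n_neighbors=3)        # assumed wrapper classifier
    while remaining and len(selected) < max_genes:
        scores = []
        for g in remaining:
            subset = selected + [g]
            acc = cross_val_score(clf, X[:, subset], y, cv=cv).mean()
            scores.append((acc, g))
        acc, g = max(scores)
        if acc <= best_score:                        # stop when no gene helps
            break
        selected.append(g)
        remaining.remove(g)
        best_score = acc
    return selected, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))                   # 60 samples, 200 "genes"
    y = (X[:, 5] + X[:, 17] > 0).astype(int)         # only two informative genes
    genes, acc = incremental_wrapper_selection(X, y)
    print("selected genes:", genes, "accuracy:", round(acc, 3))
```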


Keywords: Cancer classification; Gene subset; Informative gene; Microarray; Wrappers.


Download Full Text


 
Paper #3

Development and Evaluation of Blockchain based Secure Application for Verification and Validation of Academic Certificates

Elva Leka and Besnik Selimi


Abstract: Academic degrees are subject to corruption, system flaws, forgeries and imitations. In this paper we propose a blockchain smart contract-based application, built on the Ethereum platform, to store, distribute and verify academic certificates. It constitutes a trusted, decentralized certificate management system that offers a unified viewpoint for students and academic institutions, as well as for other potential stakeholders such as employers. The article describes the implementation of the three main parts of our proposed solution: the verification application, the university interface and the accreditor interface. The application avoids administrative barriers and makes the deployment, verification and validation of certificates faster, more efficient and more secure. Additionally, it offers confidentiality of the data by applying the AES encryption algorithm before creating transactions, and it allows bulk submission of multiple academic certificates.
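Illustrative sketch: the confidentiality step mentioned above can be pictured as encrypting the certificate payload with AES before anything is referenced in a transaction. The sketch below uses AES-GCM from the Python cryptography package and derives a SHA-256 digest as the value that could be anchored on-chain; the payload layout, key handling, and digest-on-chain scheme are assumptions, not the paper's actual contract interface.

```python
# Illustrative sketch: AES-encrypt certificate data before it is referenced in
# a blockchain transaction. Uses the 'cryptography' package (AES-GCM). The
# payload layout and the on-chain digest scheme are assumptions, not the
# paper's actual implementation.
import json, os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_certificate(cert: dict, key: bytes) -> dict:
    """Encrypt a certificate record; return ciphertext plus its SHA-256 digest."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                       # 96-bit nonce for AES-GCM
    plaintext = json.dumps(cert, sort_keys=True).encode()
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)
    return {
        "nonce": nonce.hex(),
        "ciphertext": ciphertext.hex(),
        # Only this digest would need to go into the on-chain transaction;
        # the encrypted blob itself can be kept off-chain.
        "digest": hashlib.sha256(ciphertext).hexdigest(),
    }

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)    # in practice: a managed key store
    record = {"student": "Jane Doe", "degree": "MSc Computer Science",
              "issuer": "Example University", "year": 2021}
    sealed = encrypt_certificate(record, key)
    print("digest for the smart contract:", sealed["digest"])
```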


Keywords: Blockchain; Ethereum; Smart Contract; Test Environment; Verification Application.


Download Full Text


 
Paper #4

Remarks on the Behavior of an Agent-Based Model of Spatial Distribution of Species

João Bioco, Paula Prata, Fernando Cánovas and Paulo Fazendeiro


Abstract: Agent-based models have gained considerable prominence in ecological modeling, as well as in several other fields that seek to capture the emergent behavior of a complex system in which individuals interact with each other and with their environment. These models are implemented using a bottom-up approach, where the overall behavior of the system emerges from the local interactions between its components (agents or individuals). Usually, these interactions between individuals and their enclosing environment are modeled by very simple local rules. From a conceptual point of view, another appealing characteristic of this simulation approach is that it aligns well with reality whenever the system is composed of a multitude of individuals (behavioral units) that can be flexibly combined and placed in the environment. Because of this inherent flexibility, and despite their simplicity, attention must be paid to adjustments of the model parameters, which may result in unforeseen changes in the overall behavior of such models. In this paper we study the behavior of an agent-based model of the spatial distribution of species by analyzing the effects of the model parameters and the implications of the environmental variables (which compose the environment where the species lives) on the model's output. The experiments presented show that the behavior of the model depends mainly on the conditions of the environment where the species lives and on the main parameters of the species' life cycle.
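Illustrative sketch: the bottom-up mechanics described above can be made concrete with a deliberately simplified grid model in which each agent survives and reproduces according to a local environmental suitability value. The suitability map, survival rule, and parameter values below are assumptions chosen for illustration; they are not the model studied in the paper.

```python
# Minimal agent-based sketch of species spread on a grid (illustrative only;
# the rules and parameters are assumptions, not the paper's model).
import random

GRID = 20
random.seed(1)
# Environmental suitability in [0, 1] for every cell (stand-in for real layers).
suitability = [[random.random() for _ in range(GRID)] for _ in range(GRID)]
agents = [(GRID // 2, GRID // 2)]                  # start with one individual

def step(agents, survival_weight=0.9, birth_rate=0.3):
    """One generation: each agent may survive and locally reproduce."""
    next_gen = []
    for (x, y) in agents:
        s = suitability[x][y]
        if random.random() < survival_weight * s:  # survival depends on habitat
            next_gen.append((x, y))
        if random.random() < birth_rate * s:       # offspring in a neighbour cell
            dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            next_gen.append(((x + dx) % GRID, (y + dy) % GRID))
    return next_gen

for generation in range(50):
    agents = step(agents)
print("occupied cells after 50 generations:", len(set(agents)))
```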


Keywords: Agent-based modeling; (Biological) Spatial distribution; Computer simulation; Discrete agents; Parameterization.


Download Full Text


 
Paper #5

FPGA Implementations of Algorithms for Preprocessing of High Frame Rate and High Resolution Image Streams in Real Time

Uroš Hudomalj, Christopher Mandla and Markus Plattner


Abstract: This paper presents FPGA implementations of image filtering and image averaging, two widely applied image preprocessing algorithms. The implementations are targeted at real-time processing of high frame rate and high resolution image streams. The developed implementations are evaluated in terms of resource usage, power consumption and achievable frame rates. For the evaluation, Microsemi's SmartFusion2 Advanced Development Kit, which includes a SmartFusion2 M2S150 SoC FPGA, is used. The performance of the developed implementation of the image filtering algorithm is compared to a solution provided by MATLAB's Vision HDL Toolbox, evaluated on the same platform. The performance of the developed implementations is also compared with FPGA implementations found in existing publications, although those are evaluated on different FPGA platforms. Difficulties in comparing performance between implementations on different platforms are addressed, and limitations of processing image streams with FPGA platforms are discussed.
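Illustrative sketch: the two preprocessing operations named above can be described with a short software reference model, of the kind one might use to validate an HDL design. The NumPy sketch below shows 3×3 kernel filtering and temporal frame averaging; the kernel, averaging depth, and data types are assumptions, and this is of course not the paper's FPGA implementation itself.

```python
# Software reference sketch of the two preprocessing steps (illustrative; the
# actual work is an FPGA/HDL implementation). Kernel and averaging depth are
# assumed values, not taken from the paper.
import numpy as np

def filter_frame(frame: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """3x3 spatial filtering with zero padding (software model of the pipeline)."""
    padded = np.pad(frame.astype(np.float32), 1)
    out = np.zeros_like(frame, dtype=np.float32)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + frame.shape[0], j:j + frame.shape[1]]
    return np.clip(out, 0, 255).astype(np.uint8)

def average_frames(frames) -> np.ndarray:
    """Temporal averaging over a stack of frames to suppress noise."""
    return np.mean(np.stack([f.astype(np.float32) for f in frames]), axis=0).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, size=(480, 640), dtype=np.uint8) for _ in range(8)]
    smoothing = np.full((3, 3), 1 / 9, dtype=np.float32)      # simple box filter
    print(filter_frame(frames[0], smoothing).shape, average_frames(frames).shape)
```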


Keywords: Image processing; Real-time image processing; FPGA; Camera Link; Resource usage; Power consumption; Frame rate; Processing data rate.


Download Full Text


 
Paper #6

Inherent Parallelism and Speedup Estimation of Sequential Programs

Sesha Kalyur and Nagaraja G.S


Abstract: Although several automated parallel conversion solutions are available, very few have attempted to provide proper estimates of the available inherent parallelism and the expected parallel speedup. CALIPER, the outcome of this research work, is a parallel performance estimation technology that can fill this void. High-level language structures such as functions, loops and conditionals, which ease program development, can be a hindrance to effective performance analysis. We refer to these program structures as the Program Shape. As a preparatory step, CALIPER removes these shape-related hindrances, an activity we refer to as Program Shape Flattening. Programs are also characterized by dependences between instructions, which impose an upper limit on the gains from parallel conversion. For parallel estimation, we first group instructions that share dependences into a class we refer to as a Dependence Class or Parallel Class. While the instructions belonging to a class run sequentially, the classes themselves run in parallel, so the parallel runtime is the runtime of the class that runs the longest. We report performance estimates of parallel conversion as two metrics: the inherent parallelism in the program is reported as the Maximum Available Parallelism (MAP), and the speedup after conversion as the Speedup After Parallelization (SAP).
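Illustrative sketch: the grouping of dependent instructions into classes lends itself to a compact toy model. The sketch below builds dependence classes with a union-find structure and reports a class count and a speedup bound in the spirit of MAP and SAP, assuming given per-instruction costs; CALIPER's actual analysis and metric definitions may differ.

```python
# Toy sketch of dependence-class grouping and speedup estimation (illustrative;
# CALIPER's actual analysis and metric definitions may differ).

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]       # path compression
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def estimate_parallelism(costs, dependences):
    """costs[i] = runtime of instruction i; dependences = [(i, j), ...]."""
    parent = list(range(len(costs)))
    for i, j in dependences:
        union(parent, i, j)                 # dependent instructions share a class
    class_cost = {}
    for i, c in enumerate(costs):
        root = find(parent, i)
        class_cost[root] = class_cost.get(root, 0) + c
    sequential = sum(costs)
    parallel = max(class_cost.values())     # classes run concurrently
    return {
        "classes": len(class_cost),                   # MAP-like class count
        "estimated_speedup": sequential / parallel,   # SAP-like upper bound
    }

if __name__ == "__main__":
    costs = [1, 1, 2, 1, 3, 1]                        # per-instruction costs
    deps = [(0, 1), (1, 2), (3, 4)]                   # instruction 5 is independent
    print(estimate_parallelism(costs, deps))
```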


Keywords: Estimation; Parallel; Performance; Prediction; MAP; SAP.


Download Full Text


 
Paper #7

Detection of Lung Nodules on CT Images based on the Convolutional Neural Network with Attention Mechanism

Khai Dinh Lai, Thuy Thanh Nguyen and Thai Hoang Le


Abstract: The development of computer-aided diagnosis (CAD) systems for automatic lung nodule detection in thoracic computed tomography (CT) scans has been an active area of research in recent years. The Lung Nodule Analysis 2016 challenge (LUNA16) encourages researchers to propose nodule detection algorithms built on two key stages: (1) candidate detection and (2) false-positive reduction. In the scope of this paper, a new convolutional neural network (CNN) architecture is proposed to efficiently address the second stage of LUNA16. Specifically, we find that typical CNN models pay little attention to the characteristics of the input data. To address this constraint, we apply an attention mechanism: we propose attaching a Squeeze-and-Excitation block (SE-Block) after each convolution layer of the CNN to emphasize the important feature maps related to the characteristics of the input image, forming an Attention sub-Convnet. The new CNN architecture is obtained by connecting these Attention sub-Convnets. In addition, we analyze the choice between the triplet loss and softmax loss functions to boost the performance of the proposed CNN; based on this analysis, the softmax loss is selected for the training phase and the triplet loss for the testing phase. Our CNN is used to minimize the number of redundant candidates and thereby improve the efficiency of false-positive reduction on the LUNA database. The results obtained, in comparison with previous models, indicate the feasibility of the proposed model.
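Illustrative sketch: for readers unfamiliar with Squeeze-and-Excitation blocks, the PyTorch sketch below shows the standard SE recalibration attached after a convolution layer, which is the general pattern the abstract refers to. The channel sizes, reduction ratio, and layer arrangement are assumptions and do not reproduce the paper's full architecture or its loss-function setup.

```python
# Standard Squeeze-and-Excitation block attached after a conv layer (PyTorch).
# Channel count and reduction ratio are assumed values; this is the generic
# SE pattern, not the paper's full Attention sub-Convnet stack.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global context
        self.fc = nn.Sequential(                       # excitation: channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight feature maps

class AttentionConvUnit(nn.Module):
    """Convolution followed by SE recalibration ('Attention sub-Convnet' style)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.se = SEBlock(out_ch)

    def forward(self, x):
        return self.se(self.conv(x))

if __name__ == "__main__":
    unit = AttentionConvUnit(1, 32)                    # single-channel CT patch
    patch = torch.randn(4, 1, 64, 64)
    print(unit(patch).shape)                           # torch.Size([4, 32, 64, 64])
```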


Keywords: Attention convolutional network; Triplet loss; Nodule detection; False-positive reduction.


Download Full Text


 
Paper #8

Codeword Detection, Focusing on Differences in Similar Words Between Two Corpora of Microblogs

Takuro Hada, Yuichi Sei, Yasuyuki Tahara and Akihiko Ohsuga


Abstract: Recently, the use of microblogs in drug trafficking has surged and become a social problem. A common method applied by cyber patrols to repress crimes such as drug trafficking involves searching for crime-related keywords. However, criminals who post crime-inducing messages make heavy use of “codewords” rather than keywords such as enjo kosai, marijuana and methamphetamine to camouflage their criminal intentions. Research suggests that these codewords change once they gain popularity; thus, effective codeword detection requires significant effort to keep track of the latest codewords. In this study, we focused on how codewords appear and on the words likely to occur in incriminating posts, in order to detect codewords with a high likelihood of appearing in such posts. We proposed new methods for detecting codewords based on differences in word usage between two corpora and conducted concealed-word detection experiments to evaluate the effectiveness of the methods. The results showed that the proposed method could detect concealed words beyond those in the initial list, and did so better than the baseline methods. These findings demonstrate the ability of the proposed method to rapidly and automatically detect codewords that change over time, as well as posts that instigate crimes, thereby potentially reducing the burden of continuous codeword surveillance.
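Illustrative sketch: the underlying idea of comparing how a word is used in two corpora can be pictured by training separate word embeddings and measuring how much a word's nearest neighbours differ between them. The overlap score, thresholds, and toy corpora below are assumptions for illustration; the paper's actual scoring and evaluation are richer than this.

```python
# Illustrative sketch: flag candidate codewords by comparing a word's nearest
# neighbours in two separately trained embedding spaces (e.g. a "suspicious"
# corpus vs. a general corpus). Scoring and thresholds are assumptions.
# Requires: pip install gensim
from gensim.models import Word2Vec

def neighbourhood(model, word, topn=10):
    if word not in model.wv:
        return set()
    return {w for w, _ in model.wv.most_similar(word, topn=topn)}

def divergence_score(word, model_a, model_b, topn=10):
    """1.0 = completely different neighbourhoods between the two corpora."""
    na, nb = neighbourhood(model_a, word, topn), neighbourhood(model_b, word, topn)
    if not na or not nb:
        return None
    return 1.0 - len(na & nb) / len(na | nb)

if __name__ == "__main__":
    # Toy tokenised corpora; real use would involve large microblog collections.
    suspicious = [["ice", "available", "tonight", "dm", "me"],
                  ["selling", "ice", "cheap", "dm"]] * 50
    general = [["ice", "cream", "melts", "in", "summer"],
               ["ice", "on", "the", "road", "is", "dangerous"]] * 50
    m_susp = Word2Vec(suspicious, vector_size=50, min_count=1, seed=1)
    m_gen = Word2Vec(general, vector_size=50, min_count=1, seed=1)
    print("divergence for 'ice':", divergence_score("ice", m_susp, m_gen))
```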


Keywords: Codeword Detection; Microblog; Twitter; Word Embedding.


Download Full Text


 
Paper #9

Customer Choice Modelling: A Multi-Level Consensus Clustering Approach

Nicolas Pasquier and Sujoy Chatterjee


Abstract: Customer Choice Modelling aims to model the decision-making process of customers, or segments of customers, through the choices and preferences identified by analyzing their behaviors in one or more specific contexts. Clustering techniques are used in this context to identify patterns in their choices and preferences, to define segments of customers with similar behaviors, and to model how customers of different segments respond to competing products and offers. However, data clustering is by nature an unsupervised learning task: the grouping of customers with similar behaviors into clusters must be performed without prior knowledge of the nature and number of intrinsic groups of data instances, i.e., customers, in the data space. Thus, the choice of the clustering algorithm and its parameterization, and of the evaluation method used to assess the relevance of the resulting clusters, are central issues. Consensus clustering, or ensemble clustering, aims to solve these issues by combining the results of different clustering algorithms and parameterizations to generate a more robust and relevant final clustering result. We present a Multi-level Consensus Clustering approach that combines the results of several clustering algorithmic configurations to generate a hierarchy of consensus clusters in which each cluster represents an agreement between different clustering results. A closed-sets-based approach is used to identify relevant agreements, and a graphical hierarchical representation of the consensus cluster construction process and the clusters' inclusion relationships is provided to the end-user. This approach was developed and evaluated in a travel industry context with Amadeus SAS. Experiments show how it can provide a better segmentation and refine the customer segments for Customer Choice Modelling by identifying relevant sub-segments, represented as sub-clusters in the hierarchical representation. The clustering of travelers was able to distinguish relevant segments of customers with similar needs and desires (i.e., customers purchasing tickets according to different criteria, such as price, flight duration, lay-over time, etc.) at different levels of precision, which is a major issue for improving the personalization of recommendations in flight search queries.
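Illustrative sketch: the ensemble idea described above can be shown by combining several k-means configurations through a co-association matrix and cutting the resulting hierarchy into consensus clusters. The configurations, linkage choice, and cut threshold below are assumptions; the paper's closed-sets construction and multi-level hierarchy are not reproduced here.

```python
# Illustrative consensus-clustering sketch via a co-association matrix
# (not the paper's closed-sets multi-level method). Requires scikit-learn/scipy.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_clusters(X, ks=(2, 3, 4, 5), n_runs=5, n_consensus=3):
    n = X.shape[0]
    coassoc = np.zeros((n, n))
    runs = 0
    for k in ks:                                   # several algorithmic configurations
        for seed in range(n_runs):
            labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
            coassoc += (labels[:, None] == labels[None, :]).astype(float)
            runs += 1
    coassoc /= runs                                # fraction of runs agreeing on a pair
    distance = 1.0 - coassoc                       # agreement -> distance
    np.fill_diagonal(distance, 0.0)
    Z = linkage(squareform(distance, checks=False), method="average")
    return fcluster(Z, t=n_consensus, criterion="maxclust")

if __name__ == "__main__":
    X, _ = make_blobs(n_samples=300, centers=3, random_state=42)
    labels = consensus_clusters(X)
    print("consensus cluster sizes:", np.bincount(labels)[1:])
```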


Keywords: Consensus Clustering; Ensemble Clustering; Multi-level Clustering; Closed Sets; Clusters Hierarchy; Customer Choice Modelling.


Download Full Text

 